Impact evaluation assesses the changes that can be attributed to a particular intervention, such as a project, program or policy, both the intended and, ideally, the unintended ones.〔(World Bank Poverty Group on Impact Evaluation ), accessed on January 6, 2008〕 In contrast to outcome monitoring, which examines whether targets have been achieved, impact evaluation is structured to answer the question: how would outcomes such as participants’ well-being have changed if the intervention had not been undertaken? This involves counterfactual analysis, that is, “a comparison between what actually happened and what would have happened in the absence of the intervention.”〔(White, H. (2006) Impact Evaluation: The Experience of the Independent Evaluation Group of the World Bank, World Bank, Washington, D.C., p. 3 )〕 Impact evaluations seek to answer cause-and-effect questions: they look for the changes in outcomes that are directly attributable to a program.〔(Gertler, Martinez, Premand, Rawlings and Vermeersch (2011) Impact Evaluation in Practice, Washington, DC: The World Bank )〕

Impact evaluation helps answer key questions for evidence-based policy making: what works, what doesn’t, where, why and for how much? It has received increasing attention in policy making in recent years in both Western and developing-country contexts.〔(Briceno, B. and Gaarder, M. (2010) Institutionalisation of government evaluation: Balancing trade-offs )〕 It is an important component of the armory of evaluation tools and approaches, and integral to global efforts to improve the effectiveness of aid delivery and public spending more generally in raising living standards.〔(''Muaz, Jalil Mohammad (2013), Practical Guidelines for conducting research. Summarising good research practice in line with the DCED Standard'' )〕 Originally oriented more towards the evaluation of social-sector programs in developing countries, notably conditional cash transfers, impact evaluation is now increasingly applied in other areas such as agriculture, energy and transport.

== Counterfactual evaluation designs ==

Counterfactual analysis enables evaluators to attribute cause and effect between interventions and outcomes. The ‘counterfactual’ measures what would have happened to beneficiaries in the absence of the intervention, and impact is estimated by comparing counterfactual outcomes to those observed under the intervention. The key challenge in impact evaluation is that the counterfactual cannot be directly observed and must be approximated with reference to a comparison group. There is a range of accepted approaches to determining an appropriate comparison group for counterfactual analysis, using either a prospective (ex ante) or a retrospective (ex post) evaluation design. Prospective evaluations begin during the design phase of the intervention and involve collection of baseline and end-line data from intervention beneficiaries (the ‘treatment group’) and non-beneficiaries (the ‘comparison group’); they may involve selection of individuals or communities into treatment and comparison groups. Retrospective evaluations are usually conducted after the implementation phase and may exploit existing survey data, although the best evaluations collect data as close to baseline as possible, to ensure comparability of treatment and comparison groups.
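The prospective design described above, with baseline and end-line data for treatment and comparison groups, lends itself to a simple difference-in-differences estimate of impact: the comparison group’s change over time stands in for the counterfactual trend. The following sketch illustrates that arithmetic only; the function name and the data are invented for illustration, not drawn from any cited study:

```python
# Illustrative sketch (hypothetical names and data): a difference-in-differences
# impact estimate from baseline and end-line outcomes.

def difference_in_differences(treat_baseline, treat_endline,
                              comp_baseline, comp_endline):
    """Impact = (change in treatment-group mean) - (change in comparison-group
    mean). The comparison group's change approximates the counterfactual,
    i.e. what would have happened to beneficiaries without the intervention."""
    mean = lambda xs: sum(xs) / len(xs)
    treatment_change = mean(treat_endline) - mean(treat_baseline)
    comparison_change = mean(comp_endline) - mean(comp_baseline)
    return treatment_change - comparison_change

# Invented example: household incomes before and after a transfer program.
impact = difference_in_differences(
    treat_baseline=[100, 110, 90, 100],   # treatment group, baseline survey
    treat_endline=[130, 140, 120, 130],   # treatment group, end-line survey
    comp_baseline=[105, 95, 100, 100],    # comparison group, baseline survey
    comp_endline=[115, 105, 110, 110],    # comparison group, end-line survey
)
print(impact)  # treatment change 30 minus comparison change 10 -> 20.0
```

This double difference removes any fixed baseline gap between the two groups, which is why comparability of the groups (discussed below) matters so much: the estimate is only valid if both groups would have followed the same trend absent the intervention.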
There are five key principles relating to internal validity (study design) and external validity (generalizability) which rigorous impact evaluations should address: confounding factors, selection bias, spillover effects, contamination, and impact heterogeneity.〔(International Initiative for Impact Evaluation (3ie), Principles for Impact Evaluation )〕

* Confounding occurs where certain factors, typically relating to socio-economic status, are correlated with exposure to the intervention and, independently of exposure, are causally related to the outcome of interest. Confounding factors are therefore alternative explanations for an observed (possibly spurious) relationship between intervention and outcome.
* Selection bias, a special case of confounding, occurs where intervention participants are non-randomly drawn from the beneficiary population and the criteria determining selection are correlated with outcomes. Unobserved factors which are associated with access to or participation in the intervention, and which are causally related to the outcome of interest, may lead to a spurious relationship between intervention and outcome if unaccounted for. Self-selection occurs where, for example, more able or organized individuals or communities, who are more likely to have better outcomes of interest, are also more likely to participate in the intervention. Endogenous program selection occurs where individuals or communities are chosen to participate because they are seen as more likely to benefit from the intervention. Ignoring confounding factors can lead to omitted variable bias; in the special case of selection bias, the endogeneity of the selection variables can cause simultaneity bias.
* Spillover (referred to as contagion in the case of experimental evaluations) occurs when members of the comparison (control) group are affected by the intervention.
* Contamination occurs when members of the treatment and/or comparison groups have access to another intervention which also affects the outcome of interest.
* Impact heterogeneity refers to differences in impact due to beneficiary type and context. High-quality impact evaluations will assess the extent to which different groups (e.g., the disadvantaged) benefit from an intervention, as well as the potential effect of context on impact. The degree to which results are generalizable determines the applicability of lessons learned for interventions in other contexts.

Impact evaluation designs are identified by the type of method used to generate the counterfactual and can be broadly classified into three categories – experimental, quasi-experimental and non-experimental designs – which vary in feasibility, cost, involvement during the design or post-implementation phase of the intervention, and degree of selection bias. White (2006)〔(White, H. (2006) Impact Evaluation: The Experience of the Independent Evaluation Group of the World Bank, World Bank, Washington, D.C. )〕 and Ravallion (2008)〔(Ravallion, M. (2008) Evaluating Anti-Poverty Programs )〕 discuss alternative impact evaluation approaches.
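The self-selection problem described above can be made concrete with a small simulation. In the sketch below (all parameter values are invented for illustration), individuals with higher unobserved “ability” are both more likely to join a program and more likely to have good outcomes anyway, so a naive comparison of participants against non-participants overstates the true effect:

```python
# Hypothetical simulation of self-selection bias (all parameters invented).
# "Ability" is an unobserved confounder: it drives both participation and
# outcomes, so the naive treated-vs-untreated gap exceeds the true effect.
import random

random.seed(0)
TRUE_EFFECT = 5.0  # the program's actual impact on the outcome

treated, untreated = [], []
for _ in range(10_000):
    ability = random.gauss(0, 1)        # unobserved characteristic
    joins = ability > 0                 # self-selection: the able join
    outcome = 50 + 3 * ability + (TRUE_EFFECT if joins else 0)
    (treated if joins else untreated).append(outcome)

naive = sum(treated) / len(treated) - sum(untreated) / len(untreated)
print(f"true effect: {TRUE_EFFECT}, naive estimate: {naive:.1f}")
# Analytically, the naive gap is about 5 + 3 * (E[ability | join] -
# E[ability | not join]) = 5 + 3 * 2 * sqrt(2/pi), roughly 9.8: the
# confounder inflates the apparent impact well above the true effect of 5.
```

A randomized assignment (the experimental designs mentioned above) breaks the link between ability and participation, which is exactly why it eliminates this bias.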